%searle[w90,jmc] Notes for Chinese Room symposium
John Searle begins his (1990)
``Consciousness, Explanatory Inversion and Cognitive Science''
with
``Ten years ago in this journal I published an
article (Searle, 1980a and 1980b) criticising what I
call Strong AI, the view that for a system to have
mental states it is sufficient for the system to
implement the right sort of program with the right inputs
and outputs. Strong AI is rather easy to refute and
the basic argument can be summarized in one sentence: {\it a
system, me for example, could implement a program for
understanding Chinese, for example, without
understanding any Chinese at all.} This idea, when
developed, became known as the Chinese Room Argument.''
The Chinese Room Argument can be refuted in one sentence:
{\it Searle confuses the mental qualities of one computational
process, himself for example, with those of another process that
the first process might be interpreting, a process that
understands Chinese, for example.}
That accomplished, the lecture will
discuss the ascription of mental qualities to machines
with special attention to the relation between syntax and
semantics, i.e. questions suggested by the Chinese Room
Argument. I will deal explicitly with Searle's four ``axioms'',
which, although they don't have a unique interpretation, suggest
various ideas worth discussing.
What is the intuition behind the opinion that computation can't
be thinking? Eliza isn't thinking, and it isn't even on the road
to thinking. By contrast, the algorithm for a successful Chinese
room involves a great deal of declarative information.
``Once we get out of that confusion, once we escape the clutches
of two thousand years of dualism, we can see that consciousness
is a biological phenomenon like any other and ultimately our
understanding of it is most likely to come through biological
investigation''
John Searle, letter, {\it New York Review of Books}, pp. 58-59,
June 14, 1990.
The discussion of the Chinese Room has remained at an
excessively high level on both sides. I propose to discuss
what would actually be involved in a set of rules for conducting
a conversation in Chinese, independently of whether these rules
are to be carried out by a human or a machine.
First we must exclude various forms of cheating that
aren't excluded by Searle's formulation of the problem.
1. We need to exclude a system like Weizenbaum's
Eliza that merely looks for certain words in the input
and makes certain syntactic transformations on each sentence
to generate an output sentence. I wouldn't count such a
program as understanding Chinese, and {\it a fortiori} Searle
wouldn't either. The program must respond as though it
knew the facts that would be familiar to an educated Chinese.
(A sketch of the kind of keyword program I mean appears below.)
2. If the rules are to be executed by a human, they
must not involve translating what was said into English, e.g.
by giving the dictionary entries for the characters. If
this were done, the English speaker could use his own understanding
of the facts of the world to generate English responses
that he then translates into Chinese. The database of facts
must not be in English. We also suppose that the human is
not allowed to do cryptanalysis to translate the inputs or
the database into English.
This eliminates the forms of cheating that I can
think of, but I don't guarantee that there aren't others.
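To make exclusion 1 concrete, here is a minimal sketch, in
Python and with keyword patterns invented purely for
illustration (this is not Weizenbaum's actual program), of the
keyword-and-transformation scheme that Eliza exemplifies:

    # A minimal Eliza-style responder.  It looks for a keyword
    # pattern in the input and applies a fixed syntactic
    # transformation to what follows it.  The rules are invented
    # for illustration.
    import re

    RULES = [
        (re.compile(r"\bI am (.+)", re.IGNORECASE),
         "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.+)", re.IGNORECASE),
         "Do you often feel {0}?"),
        (re.compile(r"\bbecause\b", re.IGNORECASE),
         "Is that the real reason?"),
    ]

    def respond(sentence):
        sentence = sentence.rstrip(".!? ")  # drop final punctuation
        for pattern, template in RULES:
            match = pattern.search(sentence)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default when no keyword matches

    print(respond("I am sure this room understands Chinese."))
    # prints: Why do you say you are sure this room understands Chinese?

The point of the exclusion is visible in the code: no store of
facts about the world is ever consulted, so the response is a
purely syntactic rearrangement of the input.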
How shall we construct our program? Artificial intelligence
is a difficult scientific problem, and conceptual advances are
required before programs with human-level intelligence can be
devised.
Here are some
considerations.
1. In discussing concrete questions of intelligence, it
is useful to distinguish between a system's algorithms and its
store of facts. While it is possible in principle to consider
the facts as built into the algorithm, making the distinction is
practically essential for studying both human and machine
intelligence. We communicate mainly in facts even when we are
trying to tell each other algorithms. (A small sketch of this
separation follows point 2 below.)
2. The central problem of AI is, in my opinion, achieving
goals in the {\it commonsense informatic situation}.
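As a toy illustration of point 1 (my own sketch, with invented
facts; nothing here is a proposal for the Chinese room program
itself), the same inference algorithm can run unchanged over
different stores of declarative facts:

    # Separating the algorithm from the store of facts: a tiny
    # forward-chaining loop whose output depends entirely on the
    # declarative facts and rules supplied to it.
    def forward_chain(facts, rules):
        """Apply (premise, conclusion) rules until nothing new is derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for predicate, subject in list(derived):
                    if predicate == premise and (conclusion, subject) not in derived:
                        derived.add((conclusion, subject))
                        changed = True
        return derived

    facts = {("human", "Socrates")}   # the declarative store
    rules = [("human", "mortal")]     # premise implies conclusion
    print(sorted(forward_chain(facts, rules)))
    # [('human', 'Socrates'), ('mortal', 'Socrates')]

The control structure is fixed; only the facts vary. A program
that met the conditions above would, on this view, owe most of
its bulk to the fact store rather than to the matching loop.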
RETURN TO SEARLE
The above considerations lead to some further comments on
Searle's contentions.
Here are his axioms:
1. Brains cause minds.
2. Syntax is not sufficient for semantics.
3. Computer programs are entirely defined by their formal, or
syntactic structures.
4. Minds have mental contents; specifically they have semantic
contents.
Conclusion 1. No computer program by itself is sufficient to give
a system a mind. Programs, in short, are not minds, and they are
not by themselves sufficient for having minds.